28 research outputs found

    Supervised learning in medical image registration

    Get PDF
    Image registration is the process of aligning images by finding the spatial relation between them. Given two images, called the fixed and the moving image, taken at different times, at different spatial locations, or with different imaging techniques, the aim of image registration is to find an optimal transformation that aligns the fixed and the moving image. Performing automatic, fast image registration with less manual fine-tuning can speed up numerous medical image processing procedures. In addition, automatic quality assessment of registration can speed up this time-consuming task. In this thesis, we developed a fast learning-based image registration technique called RegNet. Predicting registration error can be useful for the evaluation of registration procedures, which is important for the adoption of registration techniques in the clinic. In addition, quantitative error prediction can help improve registration quality. In this thesis, we proposed two quality assessment mechanisms using random forests (RF) and convolutional long short-term memory (ConvLSTM), of which the latter is faster and more accurate.
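    As a toy illustration of the registration objective described above (not RegNet itself), the following sketch warps a 1-D moving image with a displacement vector field (DVF) and scores the alignment with a sum-of-squared-differences similarity; the helper names are hypothetical:

```python
import numpy as np

def warp_1d(moving, dvf):
    """Resample a 1-D moving image at positions shifted by the DVF."""
    grid = np.arange(len(moving), dtype=float)
    return np.interp(grid + dvf, grid, moving)

def ssd(fixed, warped):
    """Sum of squared differences, a common similarity measure."""
    return float(((fixed - warped) ** 2).sum())
```

    A registration method would then search for the DVF that minimizes this similarity score over the fixed image.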

    Recovering the Imperfect: Cell Segmentation in the Presence of Dynamically Localized Proteins

    Full text link
    Deploying off-the-shelf segmentation networks on biomedical data has become common practice, yet if structures of interest in an image sequence are visible only temporarily, existing frame-by-frame methods fail. In this paper, we provide a solution for segmenting imperfect data through time, based on temporal propagation and uncertainty estimation. We integrate uncertainty estimation into the Mask R-CNN network and propagate motion-corrected segmentation masks from frames with low uncertainty to frames with high uncertainty, to handle temporary loss of the segmentation signal. We demonstrate the value of this approach over frame-by-frame segmentation and regular temporal propagation on data from human embryonic kidney (HEK293T) cells transiently transfected with a fluorescent protein that moves in and out of the nucleus over time. The method presented here will empower microscopic experiments aimed at understanding molecular and cellular function. Comment: Accepted at the MICCAI Workshop on Medical Image Learning with Less Labels and Imperfect Data, 202
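    The propagation scheme can be sketched roughly as follows; this is a hypothetical helper that ignores the motion-correction step and treats per-frame uncertainty as a single scalar:

```python
import numpy as np

def propagate_masks(masks, uncertainties, threshold=0.5):
    """Replace the mask of every high-uncertainty frame with the mask
    of the temporally nearest frame whose uncertainty is below the
    threshold, handling temporary loss of signal."""
    confident = [t for t, u in enumerate(uncertainties) if u < threshold]
    out = list(masks)
    for t, u in enumerate(uncertainties):
        if u >= threshold and confident:
            src = min(confident, key=lambda s: abs(s - t))  # nearest confident frame
            out[t] = masks[src]
    return out
```

    In the actual method the propagated mask would additionally be motion-corrected before being transferred, and uncertainty comes from the network itself.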

    Hierarchical prediction of registration misalignment using a convolutional LSTM: application to chest CT scans

    Get PDF
    In this paper we propose a supervised method to predict registration misalignment using convolutional neural networks (CNNs). The task is cast as a classification problem with multiple classes of misalignment: "correct" (0-3 mm), "poor" (3-6 mm) and "wrong" (over 6 mm). Rather than a direct prediction, we propose a hierarchical approach, in which the prediction is gradually refined from coarse to fine. Our solution is based on a convolutional Long Short-Term Memory (LSTM) that makes hierarchical misalignment predictions on three resolutions of the image pair, leveraging the intrinsic strengths of an LSTM for this problem. The convolutional LSTM is trained on a set of artificially generated image pairs obtained from artificial displacement vector fields (DVFs). Results on chest CT scans show that incorporating multi-resolution information, and using it hierarchically via an LSTM, leads to overall better F1 scores, with fewer misclassifications in a well-tuned registration setup. The final system yields an accuracy of 87.1% and an average F1 score of 66.4%, aggregated over two independent chest CT scan studies.
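    One way to picture the coarse-to-fine fusion is the sketch below; the averaging rule is purely illustrative and stands in for the ConvLSTM's recurrent state, and all names are hypothetical:

```python
import numpy as np

CLASSES = ("correct", "poor", "wrong")  # 0-3 mm, 3-6 mm, over 6 mm

def hierarchical_predict(logits_per_level):
    """Fuse per-resolution class logits coarse-to-fine: each finer level
    refines the running prediction by averaging its logits with the
    accumulated state (a stand-in for the ConvLSTM hidden state)."""
    state = np.zeros_like(logits_per_level[0])
    for logits in logits_per_level:  # ordered coarse -> fine
        state = 0.5 * (state + logits)
    return CLASSES[int(np.argmax(state))]
```

    The point of the hierarchy is that later (finer) levels dominate the fused prediction while coarse levels still contribute context.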

    Fast Learning-based Registration of Sparse 3D Clinical Images

    Full text link
    We introduce SparseVM, a method that registers clinical-quality 3D MR scans both faster and more accurately than previously possible. Deformable alignment, or registration, of clinical scans is a fundamental task for many clinical neuroscience studies. However, most registration algorithms are designed for high-resolution research-quality scans. In contrast to research-quality scans, clinical scans are often sparse, missing up to 86% of the slices available in research-quality scans. Existing methods for registering these sparse images are either inaccurate or extremely slow. We present a learning-based registration method, SparseVM, that is more accurate and orders of magnitude faster than the most accurate clinical registration methods. To our knowledge, it is the first method to use deep learning specifically tailored to registering clinical images. We demonstrate our method on a clinically acquired MRI dataset of stroke patients and on a simulated sparse MRI dataset. Our code is available as part of the VoxelMorph package at http://voxelmorph.mit.edu/. Comment: This version was accepted to CHIL. It builds on the previous version of the paper and includes more experimental results.
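    A key ingredient when registering sparse scans is evaluating similarity only on the voxels that were actually observed. A minimal sketch of such a masked similarity (hypothetical helper, not the actual SparseVM loss) could look like:

```python
import numpy as np

def masked_mse(fixed, warped_moving, observed_mask):
    """Mean squared error restricted to observed (non-missing) voxels,
    so that missing slices in sparse clinical scans do not penalize
    the alignment."""
    diff2 = (fixed - warped_moving) ** 2
    return float(diff2[observed_mask].mean())
```

    A learning-based registration network can then be trained with this masked term in place of a dense image similarity.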

    Joint registration and segmentation via multi-task learning for adaptive radiotherapy of prostate cancer

    Get PDF
    Medical image registration and segmentation are two of the most frequent tasks in medical image analysis. As these tasks are complementary and correlated, it would be beneficial to solve them jointly. In this paper, we formulate registration and segmentation as a joint problem via a Multi-Task Learning (MTL) setting, allowing these tasks to leverage their strengths and mitigate their weaknesses through the sharing of beneficial information. We propose to merge these tasks not only at the loss level, but at the architectural level as well. We studied this approach in the context of adaptive image-guided radiotherapy for prostate cancer, where planning and follow-up CT images as well as their corresponding contours are available for training. At test time the contours of the follow-up scans are not available, which is a common scenario in adaptive radiotherapy. The study involves two datasets from different manufacturers and institutes. The first dataset was divided into training (12 patients) and validation (6 patients) sets and was used to optimize and validate the methodology, while the second dataset (14 patients) was used as an independent test set. We carried out an extensive quantitative comparison of the quality of the automatically generated contours from different network architectures as well as loss weighting methods. Moreover, we evaluated the quality of the generated deformation vector fields (DVFs). We show that MTL algorithms outperform their Single-Task Learning (STL) counterparts and achieve better generalization on the independent test set. The best algorithm achieved a mean surface distance of 1.06 +/- 0.3 mm, 1.27 +/- 0.4 mm, 0.91 +/- 0.4 mm, and 1.76 +/- 0.8 mm on the validation set for the prostate, seminal vesicles, bladder, and rectum, respectively.
The high accuracy of the proposed method, combined with its fast inference speed, makes it a promising approach for automatic re-contouring of follow-up scans in adaptive radiotherapy, potentially reducing treatment-related complications and thereby improving patients' quality of life after treatment. The source code is available at https://github.com/moelmahdy/JRS-MTL.
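    The loss-level merge can be sketched as a weighted sum of task losses; the weights below are purely illustrative (the paper also merges at the architectural level and compares several loss-weighting methods):

```python
def mtl_loss(reg_loss, seg_loss, dvf_smoothness,
             w_reg=1.0, w_seg=1.0, w_smooth=0.1):
    """Weighted sum of a registration similarity term, a segmentation
    term, and a DVF-regularization term (illustrative weights)."""
    return w_reg * reg_loss + w_seg * seg_loss + w_smooth * dvf_smoothness
```

    Tuning or learning these weights is exactly the loss-weighting question the study compares across architectures.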

    Unsupervised Probabilistic Deformation Modeling for Robust Diffeomorphic Registration

    Get PDF
    We propose a deformable registration algorithm based on unsupervised learning of a low-dimensional probabilistic parameterization of deformations. We model registration in a probabilistic and generative fashion by applying a conditional variational autoencoder (CVAE) network. This model also enables generating normal or pathological deformations of any new image from the probabilistic latent space. Most recent learning-based registration algorithms use supervised labels or deformation models that lack important properties such as diffeomorphism and sufficiently regular deformation fields. In this work, we constrain transformations to be diffeomorphic by using a differentiable exponentiation layer with a symmetric loss function. We evaluated our method on 330 cardiac MR sequences and demonstrate robust intra-subject registration results comparable to two state-of-the-art methods, with more regular deformation fields than a recent learning-based algorithm. Our method reached a mean DICE score of 78.3% and a mean Hausdorff distance of 7.9 mm. In two preliminary experiments, we illustrate the model's ability to transport pathological deformations to healthy subjects and to cluster five diseases in the unsupervised deformation encoding space with a classification performance of 70%.
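    Exponentiation layers of this kind typically implement scaling and squaring of a stationary velocity field. A 1-D sketch (hypothetical helper, not the paper's implementation) is:

```python
import numpy as np

def exp_velocity_1d(v, n_steps=6):
    """Scaling-and-squaring exponentiation of a stationary 1-D velocity
    field: scale the field down by 2**n_steps, then repeatedly compose
    the small deformation with itself to obtain a diffeomorphic
    displacement."""
    grid = np.arange(len(v), dtype=float)
    disp = v / (2.0 ** n_steps)          # scaling step
    for _ in range(n_steps):             # squaring: phi <- phi o phi
        warped = np.interp(grid + disp, grid, disp)
        disp = disp + warped
    return disp
```

    Because each small step is close to the identity, the composed map stays invertible, which is what enforces the diffeomorphism constraint.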

    Esophageal tumor segmentation in CT images using a Dilated Dense Attention Unet (DDAUnet)

    Get PDF
    Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, and the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Obtaining this additional information is time-consuming, while its interpretation is error-prone and may lead to non-deterministic results. In this paper we investigate whether, and to what extent, a simplified clinical workflow based on CT alone allows one to automatically segment the esophageal tumor with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network receptive field. We collected a dataset of 792 scans from 288 distinct patients, including varying anatomies with air pockets, feeding tubes and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a DSC value of 0.79 +/- 0.20, a mean surface distance of 5.4 +/- 20.2 mm and a 95% Hausdorff distance of 14.7 +/- 25.0 mm for 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via https://github.com/yousefis/DenseUnet_Esophagus_Segmentation.
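    A channel attention gate of the kind mentioned above can be sketched in squeeze-and-excitation style as follows (hypothetical helper; the actual DDAUnet gates also include spatial attention and sit inside dense blocks):

```python
import numpy as np

def channel_attention(features, w, b):
    """Channel gate: global-average pool each channel (squeeze), pass
    the pooled vector through a linear map and a sigmoid (excitation),
    then rescale each channel by its gate value.
    features: (C, H, W); w: (C, C); b: (C,)."""
    pooled = features.mean(axis=(1, 2))              # squeeze
    gate = 1.0 / (1.0 + np.exp(-(w @ pooled + b)))   # excitation
    return features * gate[:, None, None]            # channel rescale
```

    The gate lets the network amplify determinant feature maps and suppress the rest, which is the "selective concentration" the abstract refers to.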

    Robust Multimodal Image Registration Using Deep Recurrent Reinforcement Learning

    Full text link
    The crucial components of a conventional image registration method are the choice of the right feature representations and similarity measures. These two components, although elaborately designed, are essentially handcrafted using human knowledge. In this work, both components are instead learned in an end-to-end manner via reinforcement learning. Specifically, an artificial agent, composed of a combined policy and value network, is trained to move the moving image in the right direction. We train this network with an asynchronous reinforcement learning algorithm, in which a customized reward function encourages robust image registration. The trained network is further combined with lookahead inference to improve registration capability. The advantage of this algorithm is demonstrated by its superior performance on clinical MR and CT image pairs compared with other state-of-the-art medical image registration methods.
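    The agent-based formulation can be caricatured with a toy greedy agent over discrete translation actions, where the reward is the reduction in distance to the target transform. All names are hypothetical; the paper trains a policy/value network with asynchronous RL rather than using greedy search:

```python
import numpy as np

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # +/- 1 pixel in x or y

def greedy_register(t_init, t_target, max_steps=20):
    """Toy agent: at each step, take the action with the highest reward,
    i.e. the largest reduction in distance to the target translation."""
    t = np.array(t_init, dtype=float)
    target = np.array(t_target, dtype=float)
    for _ in range(max_steps):
        d = np.linalg.norm(t - target)
        if d == 0:
            break
        best = max(ACTIONS, key=lambda a: d - np.linalg.norm(t + a - target))
        t = t + best
    return tuple(t)
```

    In the real setting the target transform is unknown at test time, which is why a learned policy and a reward shaped during training are needed instead of this oracle-guided search.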

    Novel near-infrared spectroscopy-intravascular ultrasound-based deep-learning methodology for accurate coronary computed tomography plaque quantification and characterization.

    Get PDF
    AIMS: Coronary computed tomography angiography (CCTA) is inferior to intravascular imaging in detecting plaque morphology and quantifying plaque burden. We aim to, for the first time, train a deep-learning (DL) methodology for accurate plaque quantification and characterization in CCTA using near-infrared spectroscopy-intravascular ultrasound (NIRS-IVUS). METHODS AND RESULTS: Seventy patients who underwent CCTA and NIRS-IVUS imaging were prospectively recruited. Corresponding cross-sections were matched using in-house developed software, and the NIRS-IVUS estimations of the lumen, vessel wall borders, and plaque composition were used to train a convolutional neural network on 138 vessels. The performance was evaluated in 48 vessels and compared against the estimations of NIRS-IVUS and the conventional CCTA expert analysis. Sixty-four patients (186 vessels, 22 012 matched cross-sections) were included. The DL methodology provided estimations that were closer to NIRS-IVUS than the conventional approach for the total atheroma volume (ΔDL-NIRS-IVUS: -37.8 ± 89.0 vs. ΔConv-NIRS-IVUS: 243.3 ± 183.7 mm3, variance ratio: 4.262, P < 0.001) and percentage atheroma volume (-3.34 ± 5.77 vs. 17.20 ± 7.20%, variance ratio: 1.578, P < 0.001). The DL methodology detected lesions more accurately than the conventional approach (area under the curve (AUC): 0.77 vs. 0.67, P < 0.001) and quantified minimum lumen area (ΔDL-NIRS-IVUS: -0.35 ± 1.81 vs. ΔConv-NIRS-IVUS: 1.37 ± 2.32 mm2, variance ratio: 1.634, P < 0.001), maximum plaque burden (4.33 ± 11.83% vs. 5.77 ± 16.58%, variance ratio: 2.071, P = 0.004), and calcific burden (-51.2 ± 115.1 vs. -54.3 ± 144.4, variance ratio: 2.308, P < 0.001) more accurately than the conventional approach. The DL methodology was able to segment a vessel on CCTA in 0.3 s.
CONCLUSIONS: The DL methodology, developed for CCTA analysis from co-registered NIRS-IVUS and CCTA data, enables rapid and accurate assessment of lesion morphology and is superior to expert analysts (Clinicaltrials.gov: NCT03556644).